Garment image recognition based on adaptive pooling neural network
HU Cong, QU Jinjin, XU Chuanpei, ZHU Aijun
Journal of Computer Applications 2018, 38(8): 2211-2217. DOI: 10.11772/j.issn.1001-9081.2018010223
Focusing on the issue that traditional pooling methods cannot extract effective feature values, an adaptive pooling method was proposed, in which the pooling result is adjusted according to the size of the pooling region, the element values within it, and the number of training rounds. A function combining the interpolation principle with the max-pooling model was constructed, and its value was used as the pooling result; the model was then compared with other pooling models by cross-validation. Focusing on the inefficiency of selecting hyperparameters from empirical values and verifying every combination on the full dataset, a small-sample tuning method was proposed: a small sample was drawn from the original dataset by stratified sampling, the encoded hyperparameter combinations were cyclically trained and tested on this small dataset, and the optimal hyperparameters were obtained by decoding the combination with the highest recognition rate. Experiments on the DeepFashion dataset show that the recognition rate of the adaptive pooling model is about 83%, which is about 2.5% higher than that of the max-pooling model. The hyperparameters selected on the small sample were also compared experimentally with random hyperparameter combinations on the original dataset; the results show that the hyperparameters chosen by the small-sample tuning method are optimal within the empirical value range, reaching a recognition rate of 86.98%, about 41.4% higher than the average recognition rate of the random combinations. The adaptive pooling method can be extended to other neural networks, and the small-sample tuning method provides a basis for efficient hyperparameter selection in deep neural networks.
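The abstract does not give the exact adaptive pooling function, so the following is only a minimal NumPy sketch of the idea: the pooling result interpolates between the mean and the maximum of the pooling window, with a weight that grows with the training round. The names adaptive_pool and pool_feature_map and the particular weighting are assumptions, not the paper's formula.

```python
import numpy as np

def adaptive_pool(window, epoch, max_epochs):
    """Assumed adaptive pooling step: interpolate between mean pooling and
    max pooling, weighting the max term more heavily as training proceeds."""
    window = np.asarray(window, dtype=float)
    alpha = epoch / float(max_epochs)          # weight grows with training rounds
    return alpha * window.max() + (1.0 - alpha) * window.mean()

def pool_feature_map(fmap, k, epoch, max_epochs):
    """Apply the adaptive pooling over non-overlapping k x k windows."""
    h, w = fmap.shape
    out = np.empty((h // k, w // k))
    for i in range(0, h - h % k, k):
        for j in range(0, w - w % k, k):
            out[i // k, j // k] = adaptive_pool(fmap[i:i + k, j:j + k],
                                                epoch, max_epochs)
    return out

# Example: pool an 8x8 feature map with 2x2 windows at epoch 5 of 20.
fmap = np.random.rand(8, 8)
print(pool_feature_map(fmap, 2, epoch=5, max_epochs=20).shape)  # (4, 4)
```

Likewise, a minimal sketch of the small-sample tuning procedure, assuming hyperparameter combinations are encoded simply as dictionaries over a value grid; stratified_subset, tune_on_small_sample, and the caller-supplied train_and_eval are hypothetical names, and a real run would train the garment-recognition network on the stratified subset.

```python
import itertools
import random
from collections import defaultdict

def stratified_subset(samples, labels, fraction=0.05, seed=0):
    """Draw a small stratified sample: the same fraction from every class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    subset = []
    for y, xs in by_class.items():
        rng.shuffle(xs)
        k = max(1, int(len(xs) * fraction))
        subset.extend((x, y) for x in xs[:k])
    return subset

def tune_on_small_sample(train_and_eval, grid):
    """Cycle over every encoded hyperparameter combination on the small
    sample and return ('decode') the one with the best recognition rate.
    train_and_eval(params) is supplied by the caller: it trains the network
    on the small sample and returns its accuracy."""
    best_params, best_acc = None, -1.0
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))       # one encoded combination
        acc = train_and_eval(params)
        if acc > best_acc:
            best_params, best_acc = params, acc
    return best_params, best_acc

# Toy usage with an illustrative grid and a stand-in evaluator
# (a real run would train the CNN on the stratified subset instead).
grid = {"learning_rate": [1e-2, 1e-3], "batch_size": [32, 64], "dropout": [0.3, 0.5]}
best, acc = tune_on_small_sample(lambda p: 1.0 - p["learning_rate"] - p["dropout"] / 10, grid)
print(best, acc)
```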
Parallel design and implementation of scale invariant feature transform algorithm based on OpenCL
XU Chuanpei, WANG Guang
Journal of Computer Applications 2016, 36(7): 1801-1806. DOI: 10.11772/j.issn.1001-9081.2016.07.1801
To address the poor real-time performance of the Scale Invariant Feature Transform (SIFT) algorithm, a parallel SIFT algorithm optimized with the Open Computing Language (OpenCL) was proposed. Firstly, the steps of the original algorithm were split and recombined, and the in-memory indexing of feature points was restructured, so that all intermediate results could be exchanged entirely in memory. Then, each step of the original algorithm was parallelized; data-reading efficiency was improved and transmission latency reduced by reusing global memory objects, sharing local memory, and optimizing memory access. Finally, a fine-grained parallel SIFT algorithm was implemented with OpenCL on a Graphics Processing Unit (GPU) platform and ported to a Central Processing Unit (CPU) platform. With registration results close to those of the original algorithm, the parallel version achieved feature-extraction speedups of 10.51-19.33 times on the GPU platform and 2.34-4.74 times on the CPU platform. The experimental results show that the OpenCL-accelerated SIFT algorithm can improve the real-time performance of image registration and overcomes the drawback of Compute Unified Device Architecture (CUDA), which is difficult to port and therefore cannot make full use of the multiple computing cores in heterogeneous systems.
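The abstract names the optimizations (reusing global memory objects, shared local memory, GPU/CPU portability) without code, so the following is only a minimal pyopencl sketch of that style of host code: one pair of device buffers is created once and reused across the Gaussian-blur levels of a scale-space octave, and the same host code runs unchanged on either a GPU or a CPU device. The kernel blur_row and the helpers gaussian_coef and blur_octave are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import pyopencl as cl

# Illustrative horizontal Gaussian-blur kernel: one building block of
# SIFT's scale-space construction (not the authors' kernel).
KERNEL_SRC = """
__kernel void blur_row(__global const float *src, __global float *dst,
                       __constant float *coef, const int radius,
                       const int width, const int height)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    if (x >= width || y >= height) return;
    float acc = 0.0f;
    for (int k = -radius; k <= radius; ++k) {
        int xx = clamp(x + k, 0, width - 1);
        acc += coef[k + radius] * src[y * width + xx];
    }
    dst[y * width + x] = acc;
}
"""

def gaussian_coef(sigma):
    """Normalized 1-D Gaussian coefficients and the filter radius."""
    radius = max(1, int(3 * sigma))
    xs = np.arange(-radius, radius + 1, dtype=np.float32)
    c = np.exp(-xs * xs / (2 * sigma * sigma)).astype(np.float32)
    return c / c.sum(), radius

def blur_octave(queue, prg, img, sigmas):
    """Blur one octave; the src/dst device buffers are created once and
    reused (multiplexed) for every sigma level instead of reallocated."""
    ctx, mf = queue.context, cl.mem_flags
    h, w = img.shape
    src = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=img)
    dst = cl.Buffer(ctx, mf.WRITE_ONLY, size=img.nbytes)
    levels = []
    for sigma in sigmas:
        coef, radius = gaussian_coef(sigma)
        cbuf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=coef)
        prg.blur_row(queue, (w, h), None, src, dst, cbuf,
                     np.int32(radius), np.int32(w), np.int32(h))
        level = np.empty_like(img)
        cl.enqueue_copy(queue, level, dst)     # read the result back to host
        levels.append(level)
    return levels

# The same host code runs on a GPU or a CPU device: only the chosen
# OpenCL context differs, which is the portability point of the abstract.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, KERNEL_SRC).build()
img = np.random.rand(256, 256).astype(np.float32)
levels = blur_octave(queue, prg, img, sigmas=[1.6, 2.26, 3.2])
print(len(levels), levels[0].shape)
```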